Neuronal Gaussian Process Regression
The brain takes the uncertainty intrinsic to our world into account. For example, associating spatial locations with rewards requires predicting not only the expected reward at new spatial locations but also its uncertainty, so as to avoid catastrophic events and forage safely. A powerful and flexible framework for nonlinear regression that handles uncertainty in a principled Bayesian manner is Gaussian process (GP) regression. Here I propose that the brain implements GP regression and present neural networks (NNs) for it. First-layer neurons, e.g.\ hippocampal place cells, have tuning curves that correspond to evaluations of the GP kernel. Output neurons explicitly and distinctively encode the predictive mean and variance, as observed in orbitofrontal cortex (OFC) for the case of reward prediction. Because the weights of a NN implementing exact GP regression cannot be obtained with biologically plausible plasticity rules, I present approximations that yield local (anti-)Hebbian synaptic learning rules. The resulting neuronal network approximates the full GP well compared to popular sparse GP approximations and achieves comparable predictive performance.
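The mapping from exact GP regression to a two-layer network described in the abstract can be sketched as follows. This is a minimal NumPy illustration under standard GP assumptions (squared-exponential kernel with unit prior variance, Gaussian observation noise), not the paper's learned implementation: the hidden layer's activations are kernel evaluations k(x*, x_i) (playing the role of place-cell tuning curves), and two readouts compute the predictive mean and variance. All function and variable names here are illustrative.

```python
import numpy as np

def rbf_kernel(x, z, length_scale=1.0):
    """Squared-exponential kernel; column i of the result can be read
    as the tuning curve of a 'place cell' centered at z[i]."""
    d = x[:, None] - z[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_fit(x_train, y_train, noise=0.1, length_scale=1.0):
    """Precompute the readout weights of the exact GP 'network'.

    Note: these weights involve a matrix inverse over all training
    data, which is exactly the step the paper argues does not arise
    from local plasticity rules and must be approximated.
    """
    K = rbf_kernel(x_train, x_train, length_scale)
    K += noise ** 2 * np.eye(len(x_train))
    K_inv = np.linalg.inv(K)
    return K_inv @ y_train, K_inv  # mean-readout weights, variance-readout weights

def gp_predict(x_star, x_train, w_mean, K_inv, length_scale=1.0):
    """Two-layer forward pass: hidden activations are kernel
    evaluations k(x*, x_i); outputs are predictive mean and variance."""
    h = rbf_kernel(x_star, x_train, length_scale)  # first-layer activity
    mean = h @ w_mean                              # linear mean readout
    # Predictive variance: prior variance k(x*, x*) = 1 minus the
    # explained part; a quadratic readout of the same hidden layer.
    var = 1.0 - np.sum((h @ K_inv) * h, axis=1)
    return mean, var
```

Far from the training data the hidden activations vanish, so the mean reverts to the prior mean (zero) and the variance to the prior variance (one), which is the behavior that lets an agent flag unexplored locations as uncertain.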
Review for NeurIPS paper: Neuronal Gaussian Process Regression
Additional Feedback: It would be useful to define the specific task setting up front -- both at training and testing time -- and how it relates to biology. I'm relatively happy with the test-time operation: the goal is to define a neural network that takes input locations x* and returns (approximate) predictive means and variances at the queried location. I have less clarity on how the supervised training phase is handled in a biologically plausible way: in what biologically relevant scenario does the learning rule have direct access to the training outputs? Presumably these are provided by another system, such as a sensory system, but what is the biologically plausible mechanism by which the learning rule accesses them? Along similar lines, the stochastic online learning rule seems to assume a setting with a large number of (x_i, y_i) training pairs, which is an additional assumption that would be good to state up front.
Meta-review for NeurIPS paper: Neuronal Gaussian Process Regression
This paper presents a biologically plausible construction of Gaussian process regression. The 4 reviewers were split into two camps (two strong accepts and two rejects), with one camp arguing that the paper was an exciting and significant contribution to computational neuroscience and the other arguing that the GP construction and empirical evaluation were insufficient for an ML paper. There was extensive discussion, with the ML camp agreeing that they wouldn't argue strongly against acceptance if the work is indeed interesting to computational neuroscience. As NeurIPS includes computational neuroscience as a focus area, and the reviewers focusing on that aspect found the work very exciting, this paper could be quite interesting to researchers in that sub-community.